
    Identification of Hysteresis Functions Using a Multiple Model Approach

    This paper considers the identification of static hysteresis functions that describe phenomena in mechanical systems, piezoelectric actuators and materials. A solution based on a model with a parallel structure of elementary models (with switching) and the Interacting Multiple Model (IMM) approach is proposed. For each elementary model a separate IMM estimator is implemented. The estimated parameters represent a fusion of values from preset grids, weighted by the IMM mode probabilities. The estimated state of each elementary model is a fusion of the estimated states (from the separate Kalman filters) weighted by the IMM probabilities. The nonlinear identification problem is thereby reduced to a linear one. Results from simulation experiments are presented.
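
    A minimal numerical sketch of the grid-based fusion idea: a bank of candidate parameter values (the preset grid) is weighted by mode probabilities updated from measurement likelihoods, and the fused estimate is the probability-weighted average. The scalar model, grid values and noise level below are illustrative assumptions, not the paper's elementary hysteresis models.

        import numpy as np

        # Toy scalar model y_k = theta * u_k + noise; the true parameter is
        # estimated as a fusion of preset grid values weighted by mode probabilities.
        rng = np.random.default_rng(0)

        theta_true = 0.7
        grid = np.array([0.2, 0.5, 0.7, 0.9])            # preset parameter grid (modes)
        mode_prob = np.full(len(grid), 1.0 / len(grid))   # initial mode probabilities
        meas_var = 0.05 ** 2

        for k in range(200):
            u = rng.uniform(-1.0, 1.0)                    # input signal
            y = theta_true * u + rng.normal(0.0, np.sqrt(meas_var))

            # Likelihood of the measurement under each grid value (mode).
            residuals = y - grid * u
            lik = np.exp(-0.5 * residuals ** 2 / meas_var) / np.sqrt(2 * np.pi * meas_var)

            # Bayesian update of mode probabilities and probability-weighted fusion.
            mode_prob = mode_prob * lik
            mode_prob /= mode_prob.sum()
            theta_hat = np.dot(mode_prob, grid)

        print("fused parameter estimate:", theta_hat)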

    Connectivity of Random 1-Dimensional Networks

    An important problem in wireless sensor networks is to find the minimal number of randomly deployed sensors that makes a network connected with a given probability. In practice, sensors are often deployed one by one along the trajectory of a vehicle, so it is natural to assume that arbitrary probability density functions of the distances between successive sensors in a segment are given. The paper computes the probability of connectivity and coverage of 1-dimensional networks and gives estimates of the minimal number of sensors for important distributions.
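
    A Monte Carlo sketch of the connectivity probability in question: the 1-dimensional network is connected when every gap between successive sensors is within the communication range. The exponential gap distribution, sensor count and range below are assumptions for illustration; the paper treats arbitrary gap distributions analytically.

        import numpy as np

        # Estimate P(network connected) for sensors deployed one by one along a line.
        rng = np.random.default_rng(0)

        n_sensors = 50          # number of deployed sensors (assumed)
        mean_gap = 1.0          # mean distance between successive sensors (assumed)
        comm_range = 5.0        # communication range (assumed)
        n_trials = 100_000

        # Gaps between successive sensors; the network is connected when every
        # consecutive pair of sensors is within communication range.
        gaps = rng.exponential(mean_gap, size=(n_trials, n_sensors - 1))
        connected = np.all(gaps <= comm_range, axis=1)
        print("estimated P(connected):", connected.mean())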

    Combined Feature-Level Video Indexing Using Block-Based Motion Estimation

    We describe a method for attaching content-based labels to video data using a weighted combination of low-level features (such as colour, texture and motion) estimated during motion analysis. Every frame of a video sequence is modelled by a fixed set of low-level feature attributes together with a set of corresponding weights obtained with a block-based motion estimation technique. Indexing a new video involves an alternative scheme in which the weights of the features are first estimated and then classification is performed to determine the label corresponding to the video. A hierarchical architecture of increasing complexity is used to achieve robust indexing of new videos. We explore the effect of different model parameters on performance and show, using publicly available datasets, that the proposed method is effective.
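
    An illustrative sketch of labelling by a weighted combination of low-level features: a new video's feature vector is compared with class prototypes using per-feature weights. The feature names, weights, prototypes and labels below are assumptions, not values or classes from the paper.

        import numpy as np

        # Weighted nearest-prototype labelling over low-level feature vectors.
        def weighted_distance(x, prototype, weights):
            return np.sqrt(np.sum(weights * (x - prototype) ** 2))

        prototypes = {
            "sports": np.array([0.8, 0.6, 0.9]),   # [colour, texture, motion] prototype (assumed)
            "news":   np.array([0.4, 0.3, 0.1]),
        }
        weights = np.array([0.2, 0.3, 0.5])        # estimated feature weights (assumed)

        new_video = np.array([0.7, 0.5, 0.8])      # features of the video to be indexed
        label = min(prototypes, key=lambda c: weighted_distance(new_video, prototypes[c], weights))
        print("assigned label:", label)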

    Anomaly detection in video with Bayesian nonparametrics

    A novel dynamic Bayesian nonparametric topic model for anomaly detection in video is proposed in this paper. Batch and online Gibbs samplers are developed for inference. The paper introduces a new abnormality measure for decision making. The proposed method is evaluated on both synthetic and real data. A comparison with a non-dynamic model shows the superiority of the proposed dynamic one in terms of classification performance for anomaly detection.
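
    A sketch of a likelihood-based abnormality measure under a topic model: a clip is scored by the average log-probability of its visual words given learned topic-word and clip-topic distributions, and low scores are flagged as abnormal. The matrices, word indices and threshold below are toy assumptions; the paper's dynamic Bayesian nonparametric model and its specific abnormality measure are more involved.

        import numpy as np

        # Toy topic-word (phi) and clip-topic (theta) distributions.
        topic_word = np.array([[0.7, 0.2, 0.1],    # topics x visual words
                               [0.1, 0.3, 0.6]])
        clip_topic = np.array([0.5, 0.5])          # topic mixture for this clip

        word_probs = clip_topic @ topic_word       # p(word | clip)
        clip_words = [0, 0, 2, 1]                  # observed visual-word indices in the clip
        score = np.mean(np.log(word_probs[clip_words]))

        threshold = -2.0                           # decision threshold (assumed)
        print("abnormal" if score < threshold else "normal", round(score, 3))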

    Abnormal behaviour detection in video using topic modeling

    The growing number of surveillance systems makes it impossible for human operators to process all the data, so autonomous algorithms are required in the decision-making procedure. A novel dynamic topic modeling approach for abnormal behaviour detection in video is proposed. Activities and behaviours in the scene are described by the topic model, in which temporal dynamics of the behaviours are assumed. We implement an Expectation-Maximisation algorithm for inference in the model and show in experiments that it outperforms the Gibbs sampling inference scheme originally proposed in [1].
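
    A minimal Expectation-Maximisation sketch for a simple mixture-of-topics model over visual-word counts, as a stand-in for the inference idea: the E-step computes topic responsibilities per clip and the M-step re-estimates mixing weights and topic-word distributions. The data and model here are toy assumptions and omit the temporal dynamics of the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)

        X = rng.integers(0, 5, size=(20, 6)).astype(float)        # clip-by-word count matrix (toy)
        n_topics = 2
        pi = np.full(n_topics, 1.0 / n_topics)                    # topic mixing weights
        phi = rng.dirichlet(np.ones(X.shape[1]), size=n_topics)   # topic-word distributions

        for _ in range(50):
            # E-step: responsibilities r[d, k] proportional to pi[k] * prod_w phi[k, w]^X[d, w]
            log_r = np.log(pi) + X @ np.log(phi).T
            log_r -= log_r.max(axis=1, keepdims=True)
            r = np.exp(log_r)
            r /= r.sum(axis=1, keepdims=True)

            # M-step: update mixing weights and topic-word distributions.
            pi = r.mean(axis=0)
            phi = (r.T @ X) + 1e-9
            phi /= phi.sum(axis=1, keepdims=True)

        print("topic mixing weights:", pi)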

    Dual-Satellite Source Geolocation with Time and Frequency Offsets and Satellite Location Errors

    This paper considers locating a static source on Earth using time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements obtained by a dual-satellite geolocation system. The TDOA and FDOA from the source are subject to unknown time and frequency offsets because the two satellites are imperfectly time-synchronized or frequency-locked, and the satellite locations are also not accurately known. To make the source position identifiable and to mitigate the effect of satellite location errors, calibration stations at known positions are used. Achieving the maximum likelihood (ML) geolocation performance usually requires jointly estimating the source position and extra variables (i.e., the time and frequency offsets as well as the satellite locations), which is computationally intensive. In this paper, a novel closed-form geolocation algorithm is proposed. It first fuses the TDOA and FDOA measurements from the source and the calibration stations to produce a single pair of TDOA and FDOA for source geolocation. This measurement fusion step eliminates the time and frequency offsets while taking into account the presence of satellite location errors. The source position is then found via standard TDOA-FDOA geolocation. The developed algorithm has low complexity, and performance analysis shows that it attains the Cramér-Rao lower bound (CRLB) under Gaussian noise and mild conditions. Simulations using a challenging scenario with a short-baseline dual-satellite system verify the theoretical developments and demonstrate the good performance of the proposed algorithm.
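
    A sketch of the calibration idea for the time offset only: differencing the source TDOA against the TDOA measured from a calibration station at a known position cancels the common inter-satellite offset. The positions, offset and noise levels below are assumptions for illustration; the paper's closed-form algorithm additionally fuses FDOA and accounts for satellite location errors.

        import numpy as np

        rng = np.random.default_rng(0)
        c = 299_792_458.0                                  # speed of light, m/s

        s1 = np.array([1.0e7, 2.0e7, 3.0e7])               # satellite positions (assumed)
        s2 = np.array([-1.5e7, 2.2e7, 2.8e7])
        src = np.array([3.0e6, 4.0e6, 0.0])                # unknown source (ground truth for simulation)
        cal = np.array([2.0e6, 5.0e6, 0.0])                # calibration station, position known

        def tdoa(p):
            return (np.linalg.norm(p - s1) - np.linalg.norm(p - s2)) / c

        time_offset = 5e-6                                 # unknown inter-satellite time offset
        tdoa_src_meas = tdoa(src) + time_offset + rng.normal(0, 1e-9)
        tdoa_cal_meas = tdoa(cal) + time_offset + rng.normal(0, 1e-9)

        # Differencing against the calibration station cancels the common offset.
        tdoa_corrected = tdoa_src_meas - (tdoa_cal_meas - tdoa(cal))
        print("true source TDOA:      ", tdoa(src))
        print("offset-corrected TDOA: ", tdoa_corrected)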

    Localization of multiple nodes based on correlated measurements and shrinkage estimation

    Accurate covariance matrix estimation has applications in a wide range of disciplines. For many applications the estimated covariance matrix needs to be positive definite and hence invertible. When the number of data points is insufficient, the sample covariance matrix has two disadvantages: although it is unbiased, it carries a large estimation error, and it is not positive definite. Shrinkage techniques have been proposed in the fields of finance and the life sciences to obtain a covariance matrix estimate that is invertible and has a relatively small estimation error variance. In this paper, we introduce the shrinkage covariance matrix concept in the area of multiple target localization in wireless networks with correlated measurements. For localization, we use low-cost received signal strength (RSS) measurements. Unlike most studies, where the links between sensor nodes (SNs) and target nodes (TNs) are assumed independent, we use a realistic model in which these links are correlated. Location accuracy is improved by weighting each link via the shrinkage covariance matrix. Simulation results show that using the estimated shrinkage covariance improves the location accuracy of the localization algorithm.
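
    A sketch of shrinkage covariance estimation: the sample covariance is shrunk towards a scaled identity target, which yields a positive definite (hence invertible) estimate even when samples are scarce. The fixed shrinkage intensity below is an assumption; in practice it can be chosen by a Ledoit-Wolf-type rule, and in the paper the resulting matrix is used to weight the RSS links.

        import numpy as np

        rng = np.random.default_rng(0)

        n_samples, dim = 8, 12                      # fewer samples than dimensions (assumed)
        X = rng.normal(size=(n_samples, dim))

        S = np.cov(X, rowvar=False)                 # sample covariance: rank-deficient here
        target = np.trace(S) / dim * np.eye(dim)    # scaled identity shrinkage target
        lam = 0.3                                   # shrinkage intensity (assumed)
        S_shrunk = (1 - lam) * S + lam * target

        print("sample cov invertible? ", np.linalg.matrix_rank(S) == dim)
        print("shrunk cov invertible? ", np.linalg.matrix_rank(S_shrunk) == dim)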

    Traffic State Estimation via a Particle Filter with Compressive Sensing and Historical Traffic Data

    In this paper we look at the problem of estimating traffic states within segments of road using a particle filter and traffic measurements at the segment boundaries. When there are missing measurements the estimation accuracy can decrease. We propose two methods for estimating the missing measurements, both assuming that the current measurements approach the mean of the historical measurements from a suitable time period. The proposed solutions take the form of an l1-norm minimisation and a relevance vector machine type optimisation. Test scenarios involving simulated and real data verify that an accurate estimate of the traffic measurements can be achieved. These estimated missing measurements can then be used to help improve the traffic state estimation accuracy of the particle filter without a significant increase in computation time. For the real data used, this gives up to a 23.44% improvement in RMSE.
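
    A sketch of filling in missing measurements via l1-norm minimisation: the deviation of the full measurement vector from the historical mean is assumed sparse and is recovered from the available measurements with a basic ISTA solver for min_x 0.5*||y - Ax||^2 + lam*||x||_1. The sizes, historical profile, sparsity pattern and lam below are illustrative assumptions, not the paper's data or exact formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        n_total, n_observed = 40, 25
        hist_mean = 50.0 + 10.0 * np.sin(np.linspace(0, 2 * np.pi, n_total))  # historical profile (assumed)
        true_dev = np.zeros(n_total)
        true_dev[[5, 17, 30]] = [8.0, -6.0, 5.0]          # few locations deviate from history
        full = hist_mean + true_dev

        observed_idx = np.sort(rng.choice(n_total, n_observed, replace=False))
        missing_idx = np.setdiff1d(np.arange(n_total), observed_idx)
        A = np.eye(n_total)[observed_idx]                 # selection matrix for available measurements
        y = full[observed_idx] - hist_mean[observed_idx]  # observed deviations from historical mean

        # ISTA: gradient step on the quadratic term, then soft thresholding.
        lam, step = 0.5, 1.0
        x = np.zeros(n_total)
        for _ in range(200):
            x = x - step * (A.T @ (A @ x - y))
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

        reconstructed = hist_mean + x                     # filled-in measurement vector
        print("reconstructed missing measurements:", np.round(reconstructed[missing_idx], 1))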

    Bayesian neural networks for sparse coding

    Deep learning is actively used in the area of sparse coding. Current deep sparse coding methods rarely estimate the uncertainty of their predictions, so the results lack quantitative justification. Bayesian learning provides a way to estimate prediction uncertainty in neural networks (NNs) by imposing prior distributions on the weights, propagating the resulting uncertainty through the layers and computing the posterior distributions of the predictions. We propose the first method for propagating uncertainty through the sparsity-promoting layers of NNs. We design a Bayesian Learned Iterative Shrinkage-Thresholding network (BayesLISTA). An efficient posterior inference algorithm based on probabilistic backpropagation is developed. Experiments on sparse coding show that the proposed framework provides both accurate predictions and sensible estimates of uncertainty in these predictions.
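
    A deterministic sketch of a Learned ISTA (LISTA) forward pass, i.e. a few unfolded soft-thresholding layers; the Bayesian variant in the paper places distributions over the layer weights and propagates uncertainty through the soft-thresholding nonlinearity. Here the weights are fixed to their classical ISTA values, and the dictionary and sparsity level are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        n_obs, n_dict = 20, 50
        D = rng.normal(size=(n_obs, n_dict)) / np.sqrt(n_obs)      # dictionary (assumed)
        x_true = np.zeros(n_dict)
        x_true[rng.choice(n_dict, 4, replace=False)] = rng.normal(size=4)
        y = D @ x_true                                             # noiseless observations

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        # In LISTA the matrices W, S and threshold theta are learned; here they
        # are set to their ISTA values for a purely deterministic sketch.
        L = np.linalg.norm(D, 2) ** 2
        W = D.T / L
        S = np.eye(n_dict) - (D.T @ D) / L
        theta = 0.01

        x = soft(W @ y, theta)
        for _ in range(30):                                        # unfolded layers
            x = soft(W @ y + S @ x, theta)

        print("estimated support:", np.nonzero(np.abs(x) > 0.1)[0])
        print("true support:     ", np.nonzero(x_true)[0])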